Medical Director, AI Innovation and Validation, Global Patient Safety
Location: New York City, NY
At AstraZeneca, we pride ourselves on crafting a collaborative culture that champions knowledge-sharing, ambitious thinking and innovation – ultimately providing employees with the opportunity to work across teams, functions and even the globe.
Recognizing the importance of individualized flexibility, our ways of working allow employees to balance personal and work commitments while ensuring we continue to create a strong culture of collaboration and teamwork by engaging face-to-face in our offices 3 days a week. Our head office is purposely designed with collaboration in mind, providing space where teams can come together to strategize, brainstorm and connect on key projects.
Our dedication to sustainability is also central to our culture and part of what makes AstraZeneca a great place to work. We know the health of people, the planet and our business are interconnected which is why we’re taking ambitious action to tackle some of the biggest challenges of our time, from climate change to access to healthcare and disease prevention.
Introduction to Role:
The Medical Director will provide experienced clinical oversight for the design, validation, and implementation of AI-driven solutions within Global Patient Safety. This physician leadership role is critical in guiding AstraZeneca's strategy for employing Artificial Intelligence to transform Patient Safety, especially its direct application to clinical safety, thereby strengthening Patient Safety's strategic contribution to clinical development.
You will be responsible for guiding the adoption of AI across all aspects of Patient Safety, ensuring robust validation of AI tools applied to Patient Safety, and ensuring all AI applications meet the highest standards of medical accuracy, ethical integrity, and regulatory compliance. The role involves shaping the responsible adoption of sophisticated AI, including generative technologies, to enhance pharmacovigilance capabilities such as case processing, safety surveillance, and signal detection, as well as clinical assessment of safety cases, case series, and toxicity management and mitigation strategies. Success hinges on deep medical expertise, a thorough understanding of the AI ecosystem in Patient Safety, strategic collaboration with internal cross-functional partners (including Patient Safety, R&D, IT, Clinical Development, and Regulatory Affairs), and proactive engagement with Health Authorities, all ultimately to enhance patient health and safety.
Accountabilities:
- Clinical Leadership & AI Strategy in Patient Safety:
- Provide expert medical leadership and strategic direction for the integration of AI technologies within patient safety and pharmacovigilance processes.
- Champion the ethical and responsible identification, evaluation, and adoption of innovative AI technologies, including generative AI, to improve the efficiency, accuracy, and depth of patient safety insights.
- Translate complex medical and patient safety requirements into clear objectives for AI development and application, ensuring clinical relevance and utility.
- Stay abreast of advancements in AI in medicine, regulatory guidance on AI in healthcare, and industry best practices, translating these into actionable strategies for Patient Safety.
- Clinical Validation Framework & AI Governance:
- Lead the development, implementation, and continuous refinement of a comprehensive clinical validation framework for AI/ML applications in Patient Safety. This includes defining medical performance standards, bias assessment from a clinical perspective, and ensuring generalizability to relevant patient populations.
- Lead all aspects of the rigorous clinical assessment and validation of AI solutions (both internal and external), ensuring they are medically sound, safe for their intended use, and adhere to global pharmacovigilance regulations (e.g., GVP) and evolving standards for AI in healthcare.
- Establish and oversee governance processes for the AI lifecycle from a clinical perspective, including input into design, defining appropriate human oversight, monitoring for clinical performance post-deployment, and managing AI-related patient safety risks.
- Cross-Functional Collaboration & Medical Expertise:
- Serve as the primary medical subject matter expert on AI applications within Patient Safety, providing clinical insights and guidance to technical teams, data scientists, and IT professionals.
- Collaborate closely with Patient Safety Leadership, R&D, Clinical Development, Medical Affairs, IT, Data Science, Quality Assurance, and Global Regulatory Affairs to ensure AI solutions are clinically appropriate, seamlessly integrated, and meet AstraZeneca's strategic objectives.
- Effectively communicate complex AI concepts and their clinical implications to diverse audiences, including senior leadership, regulatory agencies, and medical professionals.
- External Engagement & AI Ecosystem Understanding:
- Lead the clinical due diligence and ongoing medical oversight for external AI partnerships and vendor solutions, ensuring they meet AstraZeneca's stringent patient safety, clinical quality, and ethical standards.
- Represent AstraZeneca's Patient Safety perspective on AI in relevant external forums, including interactions with regulatory agencies, academic institutions, and industry working groups.
- Maintain a detailed understanding of the evolving AI ecosystem in healthcare, including emerging technologies, key players, and their potential impact on pharmacovigilance and patient safety.
- Regulatory Compliance & Ethical AI Application:
- Ensure all AI initiatives within Patient Safety comply with global medical regulations, data privacy laws (e.g., GDPR, HIPAA), and ethical guidelines for AI in healthcare.
- Champion the principles of responsible AI, including fairness, transparency, accountability, and privacy, in all Patient Safety AI applications.
Essential Skills/Experience:
- Medical Doctor (MD) or equivalent medical degree.
- Board certification in a relevant medical specialty (e.g., Internal Medicine, Clinical Pharmacology, Preventive Medicine).
- Significant (e.g., 5-7+ years) post-qualification experience in pharmacovigilance, clinical safety, or a closely related medical field within the pharmaceutical industry or a regulatory agency.
- Demonstrated experience in the strategic application, clinical validation, or medical oversight of digital health technologies or AI/ML solutions in a healthcare, clinical research, or pharmaceutical setting.
- Deep understanding of global pharmacovigilance regulations (e.g., FDA, EMA GVP), ICH guidelines, and requirements for the clinical validation of software as a medical device or health IT solutions, including emerging AI/ML guidance for medical applications.
- Proven ability to critically evaluate AI/ML methodologies and outputs from a clinical and patient safety perspective, including assessing for potential biases, clinical utility, and safety implications.
- Strong understanding of AI/ML concepts, general methodologies, and the AI development lifecycle, sufficient to enable effective leadership of, and collaboration with, technical AI teams, data scientists, and technology vendors.
- Exceptional analytical and problem-solving skills, with the ability to translate complex medical and regulatory challenges into strategic AI initiatives.
- Outstanding communication, presentation, and interpersonal skills, with a proven ability to convey complex medical and technical information clearly to diverse audiences, including senior leadership and regulatory bodies.
- Proven leadership in managing complex clinical projects, cross-functional teams, or strategic initiatives in a regulated healthcare environment.
- Detailed understanding of the AI ecosystem in healthcare, including key technologies, vendors, research trends, and ethical considerations.
Desirable Skills/Experience:
- Experience in leading or significantly contributing to regulatory interactions (e.g., with FDA, EMA) regarding digital health tools or AI/ML applications.
- Experience in developing or implementing clinical guidelines, standard operating procedures (SOPs), or best practices for the use of technology in a medical or regulatory context.
- Familiarity with human factors engineering principles and usability assessment for clinical decision support systems or other healthcare technologies.
- Advanced degree (e.g., MPH, MSc, PhD) or other formal training in medical informatics, public health, epidemiology, data science, healthcare administration, or a related field.
- Formal training or certification in medical informatics, data science, or AI in healthcare.
- Published work, presentations, or active participation in industry consortia/working groups related to AI in medicine, digital health, pharmacovigilance, or patient safety.
- Understanding of data governance, data privacy, and cybersecurity considerations relevant to AI in healthcare.
Where can I find out more?
- Our Social Media: Follow AstraZeneca on LinkedIn https://www.linkedin.com/company/1603/
- Follow AstraZeneca on Facebook https://www.facebook.com/astrazenecacareers/
- Follow AstraZeneca on Instagram https://www.instagram.com/astrazeneca_careers/?hl=en
- Our US Footprint: Powering Scientific Innovation - YouTube
Total Rewards:
The annual base pay for this position ranges from $241,613.60 to $362,420.40. Hourly and salaried non-exempt employees will also be paid overtime pay when working qualifying overtime hours. Base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. In addition, our positions offer a short-term incentive bonus opportunity; eligibility to participate in our equity-based long-term incentive program (salaried roles), to receive a retirement contribution (hourly roles), and commission payment eligibility (sales roles). Benefits offered include a qualified retirement program [401(k) plan]; paid vacation and holidays; paid leaves; and health benefits including medical, prescription drug, dental, and vision coverage in accordance with the terms and conditions of the applicable plans. Additional details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, the employee will be in an "at-will" position, and the Company reserves the right to modify base pay (as well as any other discretionary payment or compensation program) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.
AstraZeneca is an equal opportunity employer that is committed to diversity and inclusion and providing a workplace that is free from discrimination. AstraZeneca is committed to accommodating persons with disabilities. Such accommodation is available on request in respect of all aspects of the recruitment, assessment and selection process and may be requested by emailing AZCHumanResources@astrazeneca.com.
#LI-Hybrid
Date Posted
14-Oct-2025
Closing Date
27-Oct-2025
Our mission is to build an inclusive environment where equal employment opportunities are available to all applicants and employees. In furtherance of that mission, we welcome and consider applications from all qualified candidates, regardless of their protected characteristics. If you have a disability or special need that requires accommodation, please complete the corresponding section in the application form.
AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorisation and employment eligibility verification requirements.